Conversation
Introduces a parameter resolver that expands ${var.x}, ${env.X},
${csv.source.column}, ${faker.method}, and built-in helpers
(uuid, now, randint) inside any nested dict/list/string structure.
Faker is loaded lazily so it remains an optional dependency.
Both HTTP user templates now use a shared request executor that: - Accepts list-of-tasks form alongside the legacy method-keyed dict - Forwards headers, params, cookies, json/data, timeout, redirects, files - Applies basic and bearer auth shorthands - Runs assertions (status_code, contains, json_path, header) under catch_response, marking failures via Locust's response handle - Extracts response values into the parameter resolver for reuse Proxy configure() now tolerates extra kwargs so callers can pass variables and csv_sources for placeholder resolution.
Tasks payloads can now declare a mode and per-task weight or run_if/skip_if guard. Sequence runs every task in order (default), weighted picks one task per cycle by weight, and conditional uses parameter-resolver-aware predicates so a step can depend on values extracted earlier in the run.
WebSocketUserWrapper drives connect/send/recv/sendrecv/close steps defined in the same task list shape as the HTTP users. Uses websocket-client lazily so it stays an optional dependency, fires Locust request events with timing for stat aggregation, and supports substring-style response assertions.
GrpcUserWrapper drives unary calls described by stub_path, request_path, method, and payload. Channel is reused across tasks on the same target. grpcio is imported lazily so it remains an optional dependency, and metadata accepts both list-of-pairs and dict shapes.
MqttUserWrapper drives connect/publish/subscribe/disconnect steps against an MQTT broker via paho-mqtt (lazy import). Reuses a single client across tasks per broker, runs publishes synchronously with a timeout, and emits Locust events tagged MQTT for stats.
SocketUserWrapper drives raw send/recv over TCP or UDP using stdlib sockets, supports hex: payload prefix, bounded reads, and optional substring assertions. Each step fires a Locust event tagged TCP or UDP for stat aggregation.
prepare_env and start_test accept runner_mode (local/master/worker) plus master_bind_*/master_host/master_port/expected_workers. Master mode optionally waits for the expected number of workers before launching the test; worker mode joins an existing master and skips local stats greenlets and Web UI.
start_prometheus_exporter spins up a prometheus_client HTTP endpoint and registers a Locust request listener that updates a request counter, latency histogram, and response-size histogram labeled by request_type, name, and outcome. prometheus_client is loaded lazily so the dependency stays optional.
start_influxdb_sink subscribes to Locust request events and writes each as a line-protocol point. Transport defaults to UDP for fire-and-forget; HTTP transport accepts a caller-supplied URL plus optional token. Tags carry request_type and name; fields carry latency, response bytes, success flag, and error repr.
start_opentelemetry_exporter wires Locust request events to OTel counters and histograms (requests, latency, size) and exports via OTLP gRPC at the configured endpoint. SDK and exporter packages are imported lazily so the dependency stays optional and missing pieces log a warning instead of crashing.
The socket server now supports a 4-byte length-prefixed frame
format and optional TLS via cert/key files. A shared-secret token
(from arg or LOAD_DENSITY_SOCKET_TOKEN) gates privileged commands;
once configured, every payload must include the token under the
new {token, command, op?} envelope. Quit is rejected without a
valid token. Legacy unauthenticated mode remains the default for
backwards compatibility.
The flat -e/-d/-c/--execute_str flags are kept as suppressed fallbacks for backwards compatibility, but the documented surface is now subcommand-based and serve exposes the new framed/token/TLS options for the hardened socket server.
har_to_tasks turns a HAR log into a list of LoadDensity HTTP tasks and har_to_action_json wraps it as a runnable action JSON. Filters entries by include/exclude regex over URL, strips hop-by-hop and HTTP/2 pseudo headers, decodes JSON bodies into the json field, and preserves the original status code as a status_code assertion.
The request hook now records response_time_ms and response_length on every record. Three new generators consume them: - generate_csv_report writes one row per request (success+failure) - generate_junit_report emits a JUnit testsuite for CI consumption - generate_summary_report builds totals plus p50/p90/p95/p99 per request name and overall, ready for charting or regression checks.
persist_records writes the in-memory success/failure records as a new run row plus one record row per request. list_runs and fetch_run_records expose the history. Schema is created lazily so new SQLite files work without manual setup.
StatsPanel polls test_record_instance once per second and renders total requests, current rate, average and p95 latency, and failure count. Wired into LoadDensityWidget between the start button and log view. English and traditional Chinese dictionaries gain the new stats keys.
Both new dictionaries cover the full set of widget labels and the new live stats panel keys. LanguageWrapper now selects from English, Traditional_Chinese, Japanese, and Korean.
je_load_density.mcp_server publishes the load test surface (run_test, run_action_json, create_project, import_har, generate_reports, summary, persist_records/list_runs/fetch_run, clear_records, list_executor_commands) over stdio. The mcp SDK is loaded lazily so the dependency is opt-in; missing it surfaces a clear install hint. Run with: python -m je_load_density.mcp_server
The action executor now exposes 33 LD_* commands covering reports (html/json/xml/csv/junit/summary), the parameter resolver (register/clear), HAR import, SQLite persistence (persist/list/fetch), the metrics exporters (prometheus/influx/otel start+stop), and a hardened start_socket_server entry. The package __init__ re-exports them so callers can drive LoadDensity programmatically. The socket-server registration is lazy to avoid a circular import.
Group the optional dependencies introduced by the recent feature work into install extras so users can opt in cleanly: pip install je_load_density[mqtt] # paho-mqtt pip install je_load_density[grpc] # grpcio + protobuf pip install je_load_density[websocket] # websocket-client pip install je_load_density[metrics] # prometheus + otel pip install je_load_density[mcp] # mcp SDK pip install je_load_density[faker] pip install je_load_density[all] Each runtime module already imports these lazily, so the base install footprint is unchanged.
Up to standards ✅🟢 Issues
|
| Metric | Results |
|---|---|
| Complexity | 462 |
| Duplication | 21 |
NEW Get contextual insights on your PRs based on Codacy's metrics, along with PR and Jira context, without leaving GitHub. Enable AI reviewer
TIP This summary will be updated as you push new changes.
- Bandit B104: bind Prometheus exporter to 127.0.0.1 by default, document the explicit 0.0.0.0 opt-in for container/remote setups. - Bandit B310: validate InfluxDB HTTP URL scheme is http(s) before urlopen, both at start_influxdb_sink configuration time and at every send. Annotate the urlopen with nosec B310 + rationale. - Bandit B110: replace bare 'except Exception: pass' cleanup swallowers across the metrics exporters (Prometheus, InfluxDB, OpenTelemetry) and the WebSocket / MQTT / gRPC user templates with debug-level log lines so cleanup failures stay observable. - Semgrep non-literal-import: validate the dotted path against a strict identifier regex before importlib.import_module in the gRPC user, and tag the call with nosemgrep + rationale (the framework genuinely needs to load operator-authored stubs). - pyflakes F401: drop the unused Callable import from parameter_resolver.py. Bandit run on the touched modules now reports zero findings; pytest test/ still passes (84 cases).
- githubactions:S7631 (publish-pypi.yml workflow_run gating): Tighten the trigger so the publish job only runs when the upstream CI completed on the main branch and was not itself a pull_request event. Check out workflow_run.head_sha instead of the moving main ref so we publish exactly the commit that passed CI, and push the version-bump commit via HEAD:refs/heads/main so a concurrent push to main fails fast as non-fast-forward rather than silently overwriting newer history. - python:S5332 (influxdb_sink http literal heuristic): Rename the helper from _send_http to _post_line_protocol. The scheme allowlist already permits both http:// and https://; the Sonar heuristic flagged the literal 'http' in the function name, which wasn't actually a configuration. The new name documents intent (POST one line-protocol record) without the literal.
Add a yaml.github-actions.security.workflow-run-target-code-checkout nosemgrep marker on the actions/checkout step in publish-pypi.yml. The rule fires on any workflow_run+checkout combination, but the job's if-clause already gates on workflow_run.head_branch == 'main' and workflow_run.event != 'pull_request', so a fork PR head can never reach this checkout. The check is supplemented by pinning the ref to workflow_run.head_sha.
Semgrep's inline-ignore directive only applies when the comment immediately precedes the matching line, with no other comments in between. Move the nosemgrep tag to the line above the actions/checkout step and keep the rationale comments above it.
Cognitive complexity (python:S3776) — extract helpers so each function stays at or below 15: - request_executor._check_assertions: split per assertion type into _ASSERTION_HANDLERS lookup with one helper per kind. - scenario_runner._eval_condition: dispatch operators via the new _CONDITION_OPS table. - mqtt/websocket _do_step: split into _dispatch_step plus per-method helpers (_publish, _subscribe, _send_only, _recv_only, etc.). - har_importer._entry_to_task: factor out _extract_request_headers and _attach_post_body. - socket_server.handle: factor out _handle_legacy_quit, _authorise_payload, and _dispatch_command. - influxdb_sink.start_influxdb_sink: factor _validate_transport and _build_listener. Other findings: - python:S3516 on start_influxdb_sink and __main__.main: stop always-returning the same value (drop the redundant True from the sink, replace the unreachable raise in main with a print + exit code 2 so the return is honest). - python:S4423 on the TLS server context: pin minimum_version to TLSv1_2 so older suites cannot be negotiated. - python:S1192 on the response terminator: extract _RESPONSE_TERMINATOR (and _AUTH_FAILED sentinel) instead of duplicating the byte string. - python:S1172 on execute_task/execute_tasks/run_scenario: drop the unused 'client' parameter; HTTP and FastHttp callers updated. - python:S6353 on the dotted-path and function regexes: collapse [A-Za-z0-9_] to \w in parameter_resolver and grpc_user_template. - python:S117 + python:S1481 in mcp_server: rename the Server local to server_cls in build_server, and drop the unused Server binding from run_stdio. bandit clean on the full package; pytest test/ — 84 passed.
SonarCloud python:S4423 wants the stdlib helper instead of a bare SSLContext(PROTOCOL_TLS_SERVER); create_default_context selects the hardened cipher suite list, disables compression, and locks in TLS 1.2 minimum (we keep the explicit minimum_version pin as a belt-and-braces check).
Sonar's S4423 heuristic still flags any direct ssl context creation on the server side; add an inline NOSONAR with rationale referencing the explicit TLS 1.2 minimum pin and create_default_context's hardened cipher defaults.
|
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.



Summary
This release widens LoadDensity from an HTTP-only Locust wrapper into a multi-protocol load framework with parameterised scenarios, exporters, persistence, and an MCP control surface. All new heavy dependencies (gRPC, MQTT, WebSocket, Prometheus, OpenTelemetry, MCP, Faker) are loaded lazily and surface clear install hints when missing, so the base install footprint is unchanged.
New protocols
Data parameterisation
Distributed runners and observability
Reports and persistence
Operability and tooling
Executor registry
Packaging
Test plan